11 research outputs found

    Adaptive construction of surrogate functions for various computational mechanics models

    In most science and engineering fields, numerical simulation models are used to replicate physical systems. Attempting to imitate the true behavior of complex systems results in computationally expensive simulation models. These models are often associated with a number of parameters that may be uncertain or variable. Propagation of variability from the input parameters of a simulation model to the output quantities is important for better understanding the system behavior. Variability propagation for complex systems requires repeated runs of costly simulation models with different inputs, which can be prohibitively expensive. Thus, for efficient propagation, the total number of model evaluations needs to be kept as small as possible. An efficient way to account for the variations in the output of interest with respect to these parameters is to develop black-box surrogates. This involves replacing the expensive high-fidelity simulation model with a much cheaper model (surrogate) built from a limited number of high-fidelity simulations at a set of points called the design of experiments (DoE). The obvious challenge in surrogate modeling is to efficiently deal with simulation models that are expensive and contain a large number of uncertain parameters. Moreover, replication of different types of physical systems results in simulation models that vary based on the type of output (discrete or continuous), the extent of model output information (knowledge of the output, its gradients, or both), and whether the model is stochastic or deterministic. All these variations in information from one model to another demand the development of different surrogate modeling algorithms for maximum efficiency. In this dissertation, simulation models related to application problems in solid mechanics, belonging to each of the above-mentioned classes, are considered. Different surrogate modeling strategies are proposed to deal with these models, and their performance is demonstrated and compared with existing surrogate modeling algorithms. The developed algorithms, because of their non-intrusive nature, can easily be extended to simulation models of similar classes pertaining to any other field of application.
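    The general workflow described above can be illustrated with a minimal sketch: a design of experiments is drawn, the expensive model is evaluated only at those points, and a cheap surrogate is then queried in place of the simulator. This is a hedged illustration only; the sampler, Gaussian process kernel, and the toy expensive_model function below are assumptions and do not reproduce the dissertation's own algorithms.

    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_model(x):
        # Placeholder for a costly high-fidelity simulation (hypothetical).
        return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

    # Design of experiments: a small number of space-filling samples in [0, 1]^2
    sampler = qmc.LatinHypercube(d=2, seed=0)
    X_doe = sampler.random(40)
    y_doe = expensive_model(X_doe)              # the only expensive evaluations

    # Non-intrusive surrogate fitted to the DoE data
    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    surrogate.fit(X_doe, y_doe)

    # Variability propagation: query the cheap surrogate instead of the simulator
    X_mc = sampler.random(10_000)
    y_mc = surrogate.predict(X_mc)
    print(y_mc.mean(), y_mc.std())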

    Surrogate modeling and model selection in irreducible high dimensions with small sample size

    There exist a number of high-dimensional problems in which the dimensions cannot be effectively reduced, since all of them are more or less equally important. On top of that, when the computational models are expensive, it is not practical to perform more than a small number of model evaluations. In such situations, a good space-filling design is needed that provides maximum coverage of the input domain. In surrogate modeling methods such as kriging or radial basis function interpolation, a good sampling design can also improve the condition number of the kernel matrix by placing samples as far apart from each other as possible. In this study, the performance of three hierarchical space-filling designs, namely Refined Latinized Stratified Sampling (RLSS), Hierarchical Latin Hypercube Sampling (HLHS), and the Sobol quasi-random sequence, is compared using the Rosenbrock function in different dimensions. Ordinary kriging interpolation is chosen as the surrogate modeling method, with different choices of correlation functions. The AIC criterion is used for model selection, and the accuracy of the selection is cross-verified using root mean square (RMS) error values.
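    A minimal sketch of this comparison, under stated assumptions: scipy's Sobol and Latin hypercube generators and scikit-learn's Gaussian process (kriging) stand in for the RLSS and HLHS implementations described in the paper, and AIC is computed from the fitted log marginal likelihood and the number of kernel hyperparameters.

    import numpy as np
    from scipy.stats import qmc
    from scipy.optimize import rosen
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, Matern

    dim, n_train, n_test = 4, 64, 500
    designs = {
        "LHS":   qmc.LatinHypercube(d=dim, seed=1).random(n_train),
        "Sobol": qmc.Sobol(d=dim, scramble=True, seed=1).random(n_train),
    }
    X_test = np.random.default_rng(2).random((n_test, dim))
    y_test = np.array([rosen(x) for x in X_test])

    for name, X in designs.items():
        y = np.array([rosen(x) for x in X])
        for kernel in (RBF(), Matern(nu=1.5)):
            gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
            k = gp.kernel_.theta.size                  # fitted hyperparameters
            aic = 2 * k - 2 * gp.log_marginal_likelihood_value_
            rms = np.sqrt(np.mean((gp.predict(X_test) - y_test) ** 2))
            print(f"{name:5s} {kernel.__class__.__name__:6s} AIC={aic:9.1f} RMS={rms:9.2f}")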

    Application of probabilistic modeling and automated machine learning framework for high-dimensional stress field

    Modern computational methods, involving highly sophisticated mathematical formulations, enable several tasks such as modeling complex physical phenomena, predicting key properties, and optimizing designs. The high fidelity of these computer models makes it computationally intensive to query them hundreds of times for optimization, so one usually relies on a simplified model, albeit at the cost of predictive accuracy and precision. To address this, data-driven surrogate modeling methods have shown considerable promise in emulating the behavior of expensive computer models. However, a major bottleneck in such methods is the inability to deal with high input dimensionality and the need for relatively large datasets. In such problems, the input and the output quantity of interest are high-dimensional tensors. Commonly used surrogate modeling methods for such problems require a large number of model evaluations, which precludes other numerical tasks such as uncertainty quantification and statistical analysis. In this work, we propose an end-to-end approach that maps a high-dimensional, image-like input to an output of high dimensionality or its key statistics. Our approach uses two main frameworks that perform three steps: a) reduce the input and output from a high-dimensional space to a low-dimensional space, b) model the input-output relationship in the low-dimensional space, and c) incorporate domain-specific physical constraints as masks. To reduce the input dimensionality we leverage principal component analysis, coupled with two surrogate modeling methods: a) Bayesian hybrid modeling, and b) DeepHyper's deep neural networks. We demonstrate the applicability of the approach on a linear elastic stress field problem.
    Comment: 17 pages, 16 figures, IDETC conference submission.
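    A minimal sketch of the three-step approach, with assumed stand-ins: scikit-learn's PCA for the dimensionality reduction and a small MLP in place of the Bayesian hybrid model or DeepHyper network; the dataset, latent sizes, and mask below are hypothetical placeholders.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n, h, w = 200, 64, 64                       # hypothetical dataset of image-like fields
    X_img = rng.random((n, h, w))               # input geometry / microstructure images
    Y_fld = rng.random((n, h, w))               # corresponding stress fields

    # (a) reduce input and output to low-dimensional latent spaces
    pca_x = PCA(n_components=20).fit(X_img.reshape(n, -1))
    pca_y = PCA(n_components=20).fit(Y_fld.reshape(n, -1))
    Zx = pca_x.transform(X_img.reshape(n, -1))
    Zy = pca_y.transform(Y_fld.reshape(n, -1))

    # (b) learn the input-output map between the latent spaces
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(Zx, Zy)

    # (c) predict a full-field output and apply a domain-specific mask
    # (e.g., zero stress outside the part geometry); the mask is a placeholder
    mask = np.ones((h, w))
    Y_hat = pca_y.inverse_transform(model.predict(Zx[:1])).reshape(h, w) * mask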

    An efficient optimization based microstructure reconstruction approach with multiple loss functions

    Stochastic microstructure reconstruction involves the digital generation of microstructures that match key statistics and characteristics of a (set of) target microstructure(s). This process enables computational analyses on ensembles of microstructures without having to perform exhaustive and costly experimental characterizations. Statistical-function-based and deep-learning-based methods are among the stochastic microstructure reconstruction approaches applicable to a wide range of material systems. In this paper, we integrate statistical descriptors as well as feature maps from a pre-trained deep neural network into an overall loss function for an optimization-based reconstruction procedure. This allows us to achieve significant computational efficiency in reconstructing microstructures that retain the critically important physical properties of the target microstructure. A numerical example for the microstructure reconstruction of a bi-phase random porous ceramic material demonstrates the efficiency of the proposed methodology. We further perform a detailed finite element analysis (FEA) of the reconstructed microstructures to calculate the effective elastic modulus, effective thermal conductivity, and effective hydraulic conductivity, in order to analyze the algorithm's capacity to capture the variability of these material properties relative to those of the target microstructure. This method provides an economical, efficient, and easy-to-use approach for reconstructing random multiphase materials in 2D, with the potential to be extended to 3D structures.
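    A hedged sketch of the multi-loss reconstruction idea, assuming PyTorch, a pre-trained VGG-19 feature extractor, and volume fraction as the only statistical descriptor; the paper's full descriptor set, feature layers, and loss weighting are not reproduced here.

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19, VGG19_Weights

    device = "cuda" if torch.cuda.is_available() else "cpu"
    vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:16].to(device).eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def gram(img):
        # Deep feature statistics (Gram matrix) of a 1x1xHxW image
        f = vgg(img.repeat(1, 3, 1, 1))         # grayscale -> 3 channels
        f = f.flatten(2)
        return f @ f.transpose(1, 2) / f.shape[-1]

    target = (torch.rand(1, 1, 128, 128, device=device) > 0.5).float()  # stand-in target
    recon = torch.rand(1, 1, 128, 128, device=device, requires_grad=True)
    opt = torch.optim.Adam([recon], lr=0.05)

    for step in range(200):
        opt.zero_grad()
        loss_stat = (recon.mean() - target.mean()) ** 2        # statistical descriptor term
        loss_feat = F.mse_loss(gram(recon), gram(target))      # deep feature map term
        loss = loss_stat + 1e-4 * loss_feat                    # combined reconstruction loss
        loss.backward()
        opt.step()
        recon.data.clamp_(0, 1)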